Relative general position

Authors

Abstract


Similar references

Neural encoding of relative position.

Late ventral visual areas generally consist of cells having a significant degree of translation invariance. Such a "bag of features" representation is useful for the recognition of individual objects; however, it seems unable to explain our ability to parse a scene into multiple objects and to understand their spatial relationships. We review several schemes (e.g., global features and serial at...


On the relative efficiency in general network structures

Data Envelopment Analysis (DEA) is an efficiency measurement tool for evaluation of similar Decision Making Units (DMUs). In DEA, weights are assigned to inputs and outputs and the absolute efficiency score is obtained by the ratio of weighted sum of outputs to weighted sum of inputs. In traditional DEA models, this measure is also equivalent with relative efficiency score which evaluates DMUs in compar...
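
The efficiency ratio described above can be sketched in a few lines. This is a minimal illustration, not a full DEA model: the function name and the fixed weights are assumptions, whereas real DEA selects the weights for each DMU by solving a linear program.

```python
import numpy as np

def dea_efficiency(outputs, inputs, u, v):
    """Absolute efficiency score in the classic DEA sense:
    weighted sum of outputs divided by weighted sum of inputs.
    Weights u (outputs) and v (inputs) are given here; in DEA
    proper they are chosen per DMU via a linear program (not shown)."""
    return np.dot(u, outputs) / np.dot(v, inputs)

# Illustrative DMU with two outputs and two inputs, unit weights:
score = dea_efficiency(np.array([10.0, 5.0]), np.array([4.0, 2.0]),
                       np.array([1.0, 1.0]), np.array([1.0, 1.0]))
# score = (10 + 5) / (4 + 2) = 2.5
```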


Cost-Benefit Analysis and Relative Position

There is no reason to think that the losers, in terms of relative position, had any entitlement to their antecedent relative position; and we are not, after all, speaking of redistribution from poor to rich. On distributional concerns and cost-benefit analysis, see various papers in Symposium, J. Legal Stud.


Qualitative spatial reasoning about relative point position

Qualitative spatial reasoning (QSR) abstracts metrical details of the physical world. The two main directions in QSR are topological reasoning about regions and reasoning about orientations of point configurations. Orientations can refer to a global reference system, e.g. cardinal directions or instead only to relative orientation, e.g. egocentric views. Reasoning about relative orientations po...
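
A standard primitive for the kind of relative-orientation reasoning mentioned above is the sign of the 2-D cross product, which classifies a point as left of, right of, or collinear with a directed line. The sketch below is illustrative; the function name and labels are assumptions, not the paper's calculus.

```python
def relative_orientation(a, b, c):
    """Qualitative position of point c relative to the directed
    line a -> b, decided by the sign of the 2-D cross product."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    if cross > 0:
        return "left"
    if cross < 0:
        return "right"
    return "collinear"

# Viewed from (0,0) toward (1,0), the point (0,1) lies to the left:
relative_orientation((0, 0), (1, 0), (0, 1))  # → "left"
```

Abstracting metric detail down to such a three-valued sign is exactly the kind of qualitative reduction QSR builds its calculi on.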


Self-Attention with Relative Position Representations

Relying entirely on an attention mechanism, the Transformer introduced by Vaswani et al. (2017) achieves state-of-the-art results for machine translation. In contrast to recurrent and convolutional neural networks, it does not explicitly model relative or absolute position information in its structure. Instead, it requires adding representations of absolute positions to its inputs. In this work...
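
The idea of injecting relative rather than absolute position information into attention can be sketched as a learned bias on the attention logits, indexed by clipped pairwise distance. This is a simplified illustration of the general technique; the function name, shapes, random bias table, and clipping distance are assumptions, not the paper's implementation.

```python
import numpy as np

def attention_with_relative_bias(q, k, v, max_dist=4, seed=0):
    """Scaled dot-product attention with a relative-position bias.
    One bias scalar per clipped relative distance stands in for a
    learned embedding table (here randomly initialized for the sketch)."""
    rng = np.random.default_rng(seed)
    n, d = q.shape
    bias_table = rng.normal(size=2 * max_dist + 1)
    # Relative distance j - i for each query/key pair, clipped to range.
    idx = np.clip(np.arange(n)[None, :] - np.arange(n)[:, None],
                  -max_dist, max_dist) + max_dist
    logits = q @ k.T / np.sqrt(d) + bias_table[idx]
    # Numerically stable softmax over keys.
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

Because the bias depends only on the offset j - i, the same table applies at every absolute position, which is what makes the representation relative.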



Journal

Journal title: Pacific Journal of Mathematics

Year: 1966

ISSN: 0030-8730

DOI: 10.2140/pjm.1966.18.513